Joy Buolamwini


The Download: Joy Buolamwini on AI, and Meta's beauty filter lawsuit

MIT Technology Review

AI researcher and activist Joy Buolamwini is best known for a pioneering paper she co-wrote with Timnit Gebru in 2018 that exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to make their software less biased and to back away from selling the technology to law enforcement. Now Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies write the rules that apply to them, repeating the very mistake that has previously allowed biased and oppressive technology to thrive.


Joy Buolamwini: "We're giving AI companies a free pass"

MIT Technology Review

I can tell Buolamwini finds the cover amusing. She takes a picture of it. Times have changed a lot since 1961. In her new memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini shares her life story. In many ways she embodies how far tech has come since then, and how much further it still needs to go.


Confronting the Biases Embedded in Artificial Intelligence – The Markup

#artificialintelligence

Hardly a day goes by without another revelation of race, gender, and other biases embedded in artificial intelligence systems. Just this month, for example, OpenAI disclosed that its much-touted AI image generation system DALL-E exhibits biases including gender stereotypes and tends "to overrepresent people who are White-passing and Western concepts generally." For instance, it produces images of women for the prompt "a flight attendant" and images of men for the prompt "a builder." In the disclosure, OpenAI, the entity that trained DALL-E, says it is releasing the program only to a limited group of users while it works on mitigating bias and other risks. Meanwhile, researchers using machine learning to examine electronic health records found that Black patients were more than twice as likely to be described in derogatory terms (like "resistant" or "noncompliant") in their patient records. And those are the types of records that often make up the raw material for future AI programs, like the one that aimed to predict patient-reported pain from X-ray data but was only able to make successful predictions for White patients.


12 Black Women in AI paving the way for a better world

#artificialintelligence

At The Good AI, we strongly believe Artificial Intelligence (AI) should be inclusive and celebrate diversity. However, AI also reflects its creators, and this translates into the reproduction of certain biases, related to race, gender, or sexual orientation among others, in AI products. The following article from the MIT Technology Review explains how. In light of this, the tech industry has an important responsibility towards society, and the death of George Floyd at the hands of a city police officer in Minneapolis, USA, on 25 May 2020 (one in a long series of racist attacks against African Americans) should urge us to take action. We need to make sure we are not perpetuating racism or any other kind of discrimination, and not letting it take root in our AI systems.


Turtles all the way down: Why AI's cult of objectivity is dangerous, and how we can be better

#artificialintelligence

This article was contributed by Slater Victoroff, founder and CTO of Indico Data. There is a belief, built out of science fiction and a healthy fear of math, that AI is some infallible judge of objective truth. We tell ourselves that AI algorithms divine truth from data, and that there is no truth higher than the righteous residual of a regression test. For many, the picture is simple: logic is objective, math is logic, and AI is math; thus AI is objective. This is not a benign belief.


Olivia P. Walker on LinkedIn: Olay takes on computer algorithms to fight biased beauty standards

#artificialintelligence

Joy Buolamwini is not only intelligent, she is effective and absolutely stunning. From the article: "Olay is launching a new campaign to help end discriminatory computer algorithms that skew standards of beauty, per an announcement emailed to Marketing Dive. The effort coincides with National Coding Week. The Procter & Gamble-owned brand is also teaming with [computer scientist and] activist Joy Buolamwini, founder of the Algorithmic Justice League, to conduct an audit of its own practices."


Coded Bias: The Film Everybody Needs to Watch

#artificialintelligence

Coded Bias, directed by Shalini Kantayya, is a documentary about the way artificial intelligence trawls through human data with the assistance of algorithms incorporated into sophisticated machine learning models. Although many of the algorithms used today were created in the 1980s, we have since digitalised our lives, generating data in amounts never so accessible in the history of humankind. Add to that the increase in computer processing power and the wireless exchange of information over 5G, and AI is probably the most powerful technology ever designed. It already has the capacity to individualise strategies that nudge people towards behaviours desired by a third party. Such targeting is visible only to the targeted person, leaves no traces, and is almost unregulated, with few exceptions like the GDPR (General Data Protection Regulation).


Bias in facial recognition isn't hard to discover, but it's hard to get rid of

#artificialintelligence

Joy Buolamwini is a researcher at the MIT Media Lab who pioneered research into the bias that's built into artificial intelligence and facial recognition. And the way she came to this work is almost too on the nose. As a graduate student at MIT, she created a mirror that would project aspirational images onto her face, like a lion or tennis star Serena Williams. But the facial-recognition software she installed wouldn't work on her Black face until she literally put on a white mask. Buolamwini is featured in a documentary called "Coded Bias," airing tonight on PBS.


Aren't Artificial Intelligence Systems Racist?

#artificialintelligence

There is no doubt that artificial intelligence is the future; we've seen it applied in possibly every field by now. The problem isn't with the technology, it is with the bias that goes into it, says Timnit Gebru. She goes on to add that AI is built in a manner that replicates the workforce making it, which is mostly white and male-dominated. Ever since her first conference in Spain, by far the world's most important AI conference to date, she has seen a vast difference in the number of men and women, with men being obviously dominant in number.


Objective Algorithms Are a Myth

#artificialintelligence

The protests across the U.S. and around the globe in the wake of the murder of George Floyd have raised awareness about structural inequalities. Though the specific focus has been on police brutality, scholars, activists, and artists are sounding the alarm on how systemic racism has been amplified in other areas, like the tech industry, through communication and surveillance technology. In Coded Bias, a documentary by Shalini Kantayya, the director follows MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini as she discovers one of the fundamental problems with facial recognition. While working on a facial recognition art project, Buolamwini realizes that the computer vision software has trouble tracking her face but works fine when she puts on a white mask. It was just the latest evidence of the type of bias that's baked into facial recognition and A.I. systems. These technologies often connect back to the dark historical practices of racialized surveillance, eugenics, or physiognomy.